35 research outputs found

    Combining ICA Representations for Recognizing Faces

    Independent Component Analysis (ICA) is a generalization of Principal Component Analysis (PCA) that looks for components that are both statistically independent and non-Gaussian. ICA is sensitive to high-order statistics and is therefore expected to outperform PCA in finding better basis images. Moreover, in face recognition, high-order relationships among pixels may carry more important information than the pairwise relationships on which the basis images found by PCA depend. ICA can be applied in two different representations: ICA architecture I and ICA architecture II. A new classifier that combines the two ICA architectures is proposed for face recognition. In the new classifier, the similarity measure vectors of the two ICA representations are sorted in descending order and then integrated by merging the corresponding values of each vector. The classifier was evaluated on face images from the AR Face Database, using the Cumulative Match Characteristic as the performance measure under illumination variation, expression changes, and occlusion. The proposed classifier outperforms both individual ICA architectures in all cases, especially at later ranks.
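    The merging rule is only outlined above, so the following Python sketch shows one plausible score-level reading of it: cosine similarities are computed under each ICA architecture, normalised, and merged per gallery image before ranking. The function names, the cosine measure, and the min-max normalisation are illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np

def cosine_sim(probe, gallery):
    """Cosine similarity between one probe vector and every row of a gallery matrix."""
    probe = probe / np.linalg.norm(probe)
    gallery = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    return gallery @ probe

def combined_rank(probe1, gallery1, probe2, gallery2):
    """Rank gallery images by the merged similarity of the two ICA representations."""
    s1 = cosine_sim(probe1, gallery1)            # architecture I similarities
    s2 = cosine_sim(probe2, gallery2)            # architecture II similarities
    # min-max normalise so the two score ranges are comparable (an assumption)
    s1 = (s1 - s1.min()) / (np.ptp(s1) + 1e-12)
    s2 = (s2 - s2.min()) / (np.ptp(s2) + 1e-12)
    fused = s1 + s2                              # merge corresponding values per gallery image
    return np.argsort(-fused)                    # best match first
```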

    Reconstruction of 3D faces by shape estimation and texture interpolation

    This paper addresses the ill-posed problem of reconstructing 3D faces from single 2D face images. An extended Tikhonov regularization method is combined with the standard 3D morphable model in order to reconstruct 3D face shapes from a small set of 2D facial points. The 3D face reconstruction is then completed by interpolating the input 2D texture with the model texture and warping the interpolated texture onto the reconstructed face shape. For the texture warping, the 2D face deformation is learned from the model texture using a set of facial landmarks. Our experimental results demonstrate the robustness of the proposed approach in reconstructing realistic 3D face shapes.
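    As a minimal sketch of the shape-estimation step, the following Python/NumPy code solves a Tikhonov-regularised least-squares problem for the morphable-model coefficients from a few observed facial points. It assumes an aligned setting in which the 2D landmarks index directly into the model's shape vector; the paper's extended Tikhonov method and its texture interpolation and warping stage are more involved.

```python
import numpy as np

def estimate_shape(mu, P, landmark_idx, y2d, lam=1e-3):
    """
    Recover a full 3D shape from a few 2D facial points.

    mu           : (3N,)  mean shape of the morphable model
    P            : (3N,k) shape basis (e.g. PCA eigenvectors)
    landmark_idx : indices into the 3N-vector that correspond to the
                   observed coordinates of the 2D landmarks
    y2d          : observed landmark coordinates, flattened to match landmark_idx
    lam          : Tikhonov regularisation weight
    """
    A = P[landmark_idx]                 # rows of the basis seen by the landmarks
    b = y2d - mu[landmark_idx]          # centred observations
    k = P.shape[1]
    # regularised normal equations: (A^T A + lam*I) alpha = A^T b
    alpha = np.linalg.solve(A.T @ A + lam * np.eye(k), A.T @ b)
    return mu + P @ alpha               # full reconstructed shape (3N,)
```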

    Adaptive PCA-Based Models to Reconstruct 3D Faces from Single 2D Images

    Example-based statistical face models using Principal Component Analysis (PCA) have been widely used for 3D face reconstruction and face recognition. The main concern of this thesis is to improve the accuracy and efficiency of PCA-based 3D face shape reconstruction. More precisely, the thesis addresses the challenge of increasing the Representational Power (RP) of the PCA-based model, guided by the results of the conducted empirical study, so that a limited set of training data can still yield accurate 3D reconstruction. The empirical study examines how factors such as the size of the training set and the variation among the selected training examples affect the RP of 3D PCA-based face models. A regularized 3D face reconstruction algorithm is also examined to find out how the regularization matrix, the number of feature points, and the regularization parameter λ affect the accuracy of 3D face reconstruction based on the PCA model. Importantly, an adaptive PCA-based model is proposed to increase the RP of the 3D face reconstruction model by deforming a set of examples in the training dataset. By adding these deformed samples to the original training samples, an improvement in the RP is achieved. Comprehensive experimental validation demonstrates that the proposed model considerably improves the RP of the standard PCA-based model and reduces face shape reconstruction errors.
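    A minimal sketch of how the RP of a PCA shape model can be probed against the size of the training set is given below; the RMS reconstruction error of held-out shapes is used as the RP proxy, which is an assumption about the thesis's exact measure.

```python
import numpy as np

def pca_basis(shapes):
    """Mean and orthonormal PCA basis of a (m, 3N) matrix of training shapes."""
    mu = shapes.mean(axis=0)
    _, _, Vt = np.linalg.svd(shapes - mu, full_matrices=False)
    return mu, Vt.T

def rp_error(mu, P, test_shapes):
    """RP proxy: RMS error of unseen shapes reconstructed from the basis."""
    errs = [np.linalg.norm(s - (mu + P @ (P.T @ (s - mu)))) / np.sqrt(len(s))
            for s in test_shapes]
    return float(np.mean(errs))

def rp_vs_training_size(train_shapes, test_shapes, sizes):
    """Rebuild the model with progressively larger training subsets."""
    return {m: rp_error(*pca_basis(train_shapes[:m]), test_shapes) for m in sizes}
```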

    A Comparative Study of Sorting Algorithms: Comb, Cocktail, and Counting Sort

    Sorting is probably one of the most famous problems in computing and is used in a broad variety of applications. There are many techniques to solve the sorting problem. In this paper, we conduct a comparative study to evaluate the performance of three algorithms, comb, cocktail, and counting sort, in terms of execution time. Java programming is used to implement the algorithms on numeric data under the same platform conditions. Among the three algorithms, we found that the cocktail algorithm has the shortest execution time, counting sort comes second, and comb sort comes last in terms of execution time. Future work will investigate memory space complexity.
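    Of the three algorithms, cocktail (bidirectional bubble) sort is shown below as a short sketch, together with the kind of timing harness such a comparison relies on. The paper's implementations are in Java; this Python version and the chosen input size are illustrative only.

```python
import random, time

def cocktail_sort(values):
    """Bidirectional bubble sort: passes alternate left-to-right and right-to-left."""
    a = list(values)
    lo, hi, swapped = 0, len(a) - 1, True
    while swapped:
        swapped = False
        for i in range(lo, hi):                 # forward pass bubbles the max up
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
                swapped = True
        hi -= 1
        for i in range(hi, lo, -1):             # backward pass bubbles the min down
            if a[i - 1] > a[i]:
                a[i - 1], a[i] = a[i], a[i - 1]
                swapped = True
        lo += 1
    return a

data = [random.randint(0, 10_000) for _ in range(5_000)]
t0 = time.perf_counter()
cocktail_sort(data)
print(f"cocktail sort: {time.perf_counter() - t0:.3f}s")
```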

    Performance Comparison of Simulated Annealing, GA and ACO Applied to TSP

    The travelling salesman problem (TSP) is probably one of the most famous problems in combinatorial optimization. There are many techniques to solve the TSP, such as Ant Colony Optimization (ACO), Genetic Algorithms (GA), and Simulated Annealing (SA). In this paper, we conduct a comparative study to evaluate the performance of these three algorithms in terms of execution time and shortest distance. Java programming is used to implement the algorithms on three benchmarks under the same platform conditions. Among the three algorithms, we found that Simulated Annealing has the shortest execution time (<1 s) but comes second for the shortest distance. In terms of the shortest distance between cities, ACO performs better than GA and SA; however, ACO comes last in terms of execution time.
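    A minimal sketch of the Simulated Annealing variant of such a comparison is given below (in Python rather than the paper's Java). The swap neighbourhood, geometric cooling schedule, and parameter values are assumptions, not the paper's exact configuration.

```python
import math, random

def tour_length(tour, dist):
    """Total length of a closed tour given a symmetric distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def simulated_annealing(dist, temp=1000.0, cooling=0.995, iters=20_000):
    """Anneal over random two-city swaps, returning the best tour found."""
    n = len(dist)
    tour = list(range(n))
    random.shuffle(tour)
    cur_len = tour_length(tour, dist)
    best, best_len = tour[:], cur_len
    for _ in range(iters):
        i, j = random.sample(range(n), 2)
        cand = tour[:]
        cand[i], cand[j] = cand[j], cand[i]      # swap two cities
        cand_len = tour_length(cand, dist)
        # accept better tours always, worse tours with Boltzmann probability
        if cand_len < cur_len or random.random() < math.exp((cur_len - cand_len) / temp):
            tour, cur_len = cand, cand_len
            if cur_len < best_len:
                best, best_len = tour[:], cur_len
        temp *= cooling                          # geometric cooling schedule
    return best, best_len
```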

    Enhancing FP-Growth Performance Using Multi-threading based on Comparative Study

    The time required to generate frequent patterns plays an important role in mining association rules, especially when there are a large number of patterns and/or long patterns. Association rule mining has been a major research challenge within the field of data mining for over a decade. Although tremendous progress has been made, algorithms still need improvement as databases grow ever larger. In this research we present a performance comparison between two frequent pattern extraction algorithms implemented in Java, Recursive Elimination (RElim) and FP-Growth, which are used to find frequent itemsets in a transaction database. We found that FP-Growth outperformed RElim in terms of execution time. In this context, multithreading is used to enhance the time efficiency of the FP-Growth algorithm. The results show that multithreaded FP-Growth is more efficient than single-threaded FP-Growth.
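    The sketch below illustrates the threading idea only: the frequent items are partitioned across workers and each worker mines the projected database of one item (restricted to 2-itemsets for brevity). It is not the paper's FP-Growth or RElim implementation, which was written in Java.

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def frequent_items(transactions, min_sup):
    """Items whose support meets the minimum support threshold."""
    counts = Counter(item for t in transactions for item in t)
    return {i for i, c in counts.items() if c >= min_sup}

def mine_with(item, transactions, freq, min_sup):
    """Frequent 2-itemsets containing `item`, mined from its projected database."""
    counts = Counter(other
                     for t in transactions if item in t
                     for other in (t & freq) - {item})
    return {frozenset({item, other}) for other, c in counts.items() if c >= min_sup}

def parallel_frequent_pairs(transactions, min_sup, workers=4):
    transactions = [set(t) for t in transactions]
    freq = frequent_items(transactions, min_sup)
    patterns = set()
    # one task per frequent item: the same per-item decomposition used to
    # thread FP-Growth's conditional pattern bases
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for result in pool.map(lambda i: mine_with(i, transactions, freq, min_sup), freq):
            patterns |= result
    return patterns

# note: CPython threads share the GIL, so real speedups for this CPU-bound work
# come from processes here, or from native threads as in the paper's Java code
```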

    Reconstructing 3D face shapes from single 2D images using an adaptive deformation model

    The Representational Power (RP) of an example-based model is its capability to depict a new 3D face for a given 2D face image. In this contribution, a novel approach is proposed to increase the RP of the PCA-based 3D reconstruction model by deforming a set of examples in the training dataset. Adding these deformed samples to the original training samples increases the RP. A 3D PCA-based model is adapted for each new input face image by deforming the 3D faces in the training dataset, and this adapted model is used to reconstruct the 3D face shape for the given near-frontal 2D input image. Our experimental results show that the proposed adaptive model considerably improves the RP of the conventional PCA-based model.
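    A rough sketch of the augmentation step is given below; since the paper's deformation is guided by the input face, the random convex blending used here is only a stand-in for that step, and the sample count is arbitrary.

```python
import numpy as np

def augment_with_deformed(train_shapes, n_new=50, seed=0):
    """
    Enlarge a (m, 3N) training-shape matrix with synthetically deformed samples.
    Each new sample here is a random convex blend of two training shapes; the
    paper instead deforms the examples guided by the input face image.
    """
    rng = np.random.default_rng(seed)
    new = []
    for _ in range(n_new):
        i, j = rng.choice(len(train_shapes), size=2, replace=False)
        w = rng.uniform(0.2, 0.8)
        new.append(w * train_shapes[i] + (1 - w) * train_shapes[j])
    return np.vstack([train_shapes, np.array(new)])

def build_pca(shapes):
    """Mean and orthonormal basis of the (possibly augmented) shape matrix."""
    mu = shapes.mean(axis=0)
    _, _, Vt = np.linalg.svd(shapes - mu, full_matrices=False)
    return mu, Vt.T
```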

    Adaptive face modelling for reconstructing 3D face shapes from single 2D images

    Example-based statistical face models using principal component analysis (PCA) have been widely deployed for three-dimensional (3D) face reconstruction and face recognition. Two factors generally concern such models: the size of the training dataset and the selection of the examples in the training set. The representational power (RP) of an example-based model is its capability to depict a new 3D face for a given 2D face image, and the RP can be increased by increasing the number of training samples. In this contribution, a novel approach is proposed to increase the RP of the 3D face reconstruction model by deforming a set of examples in the training dataset. A PCA-based 3D face model is adapted for each new near-frontal input face image to reconstruct its 3D face shape. Further, an extended Tikhonov regularisation method has been employed to reconstruct the 3D face shape from a small set of 2D facial feature points.

    Recognizing People from Partially Occluded Images Using Regularized Independent Component Analysis

    Face recognition approaches that use subspace projection rely heavily on basis images, especially in the case of partial occlusion. To improve recognition performance, the occlusion should be excluded from the test image during the recognition process. By analogy with image reconstruction, the proposed approach aims to represent the whole face image from a facial subregion; in this respect, face representation can be considered an inverse problem. The Tikhonov regularization approach is combined with independent component analysis (ICA) in order to obtain the image parameters from the occluded image, and these parameters are compared with those trained by ICA. The combined algorithm, named RegICA, is evaluated on face images from the AR Face Database. The cumulative match characteristic was used as the measure for evaluating the performance of RegICA under occlusion. It was found that the proposed approach outperforms ICA on the facial occlusion problem. In addition, it is able to recognize faces using any facial subregion, even a small one. Furthermore, RegICA is not time-consuming and outperforms some recent approaches in terms of accuracy.
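    A minimal sketch of the RegICA idea, as read from the abstract, is given below: the representation coefficients are recovered from the visible subregion via Tikhonov-regularised least squares and then matched against gallery coefficients. The cosine matching, the regularisation weight, and the absence of any mean-subtraction step are assumptions.

```python
import numpy as np

def regica_coefficients(A, x, visible, lam=1e-2):
    """
    Estimate the ICA representation of a face from its visible pixels only.

    A       : (n_pixels, k) ICA basis images as columns
    x       : (n_pixels,)   probe image; occluded pixels may hold anything
    visible : boolean mask marking the un-occluded subregion
    """
    Av, xv = A[visible], x[visible]
    k = A.shape[1]
    # Tikhonov-regularised least squares restricted to the visible subregion
    return np.linalg.solve(Av.T @ Av + lam * np.eye(k), Av.T @ xv)

def identify(coeffs, gallery_coeffs):
    """Nearest gallery identity by cosine similarity of the coefficients."""
    g = gallery_coeffs / np.linalg.norm(gallery_coeffs, axis=1, keepdims=True)
    c = coeffs / np.linalg.norm(coeffs)
    return int(np.argmax(g @ c))
```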

    A Comparative Study of DCT and DWT Algorithms Combined with Huffman Coding

    Image compression techniques are widely used to store and transmit data that would otherwise require large storage space and high transfer speed. The explosive growth of high-quality photos calls for efficient techniques to store and exchange data over the internet. In this paper, we present a comparative study of the Discrete Cosine Transform (DCT) and Discrete Wavelet Transform (DWT) algorithms in combination with the Huffman algorithm, denoted DCT-H and DWT-H. The comparison is based on five factors: Compression Ratio (CR), Mean Square Error (MSE), Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM), and compression/decompression time. The experiments are conducted on five BMP grey-scale images. We found that DWT-H coding is comparable to DCT-H coding in terms of CR and outperforms DCT-H in terms of MSE, PSNR, and SSIM. The average CR over the five test images is 2.36 for DCT-H and 3.17 for DWT-H. Moreover, DCT-H yields average results of MSE = 13.19, PSNR = 37.15, and SSIM = 0.76, while DWT-H yields MSE = 4.54, PSNR = 42.5, and SSIM = 0.85. On the other hand, DCT-H outperforms DWT-H in terms of execution time for compression and decompression: DCT-H has an average compression time of 0.358 s and an average decompression time of 0.122 s, while DWT-H takes 2.38 s to compress and 2.13 s to decompress.
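    The quality measures used in the comparison can be computed as in the short sketch below (MSE, PSNR, and compression ratio; SSIM usually comes from an image-processing library). The placeholder names in the usage comment are hypothetical, not artefacts of the paper.

```python
import numpy as np

def mse(original, reconstructed):
    """Mean square error between two 8-bit grey-scale images of equal size."""
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB."""
    m = mse(original, reconstructed)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)

def compression_ratio(raw_bytes, compressed_bytes):
    """Raw size divided by compressed size."""
    return raw_bytes / compressed_bytes

# hypothetical usage on one grey-scale image `img` and one codec's output:
# cr  = compression_ratio(img.size, compressed_byte_count)
# q   = psnr(img, decoded_img)
```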